[slimtensor] Add common_shims_slim with basic property getters #16454
base: gh/gasoonjia/96/base
Conversation
Add SlimTensor-based implementations of basic property getter AOTI shim functions:
1. `aoti_torch_get_data_ptr()` - Returns pointer to tensor data
2. `aoti_torch_get_sizes()` - Returns pointer to sizes array (SlimTensor stores int64_t directly)
3. `aoti_torch_get_strides()` - Returns pointer to strides array (SlimTensor stores int64_t directly)
4. `aoti_torch_get_dtype()` - Returns the scalar type as int32_t
5. `aoti_torch_get_dim()` - Returns the number of dimensions
Key design:
- Create a new common_shim_slim.h for working on the new API without impacting the current pipeline. common_shim_slim.{h/cpp} will replace the current common_shim.{h/cpp} once everything has been set up.
- Uses `#ifdef CUDA_AVAILABLE` conditional compilation to separate the CUDA backend and MPS backend implementations, since SlimTensor does not yet support MPS. The branch will be removed once SlimTensor supports MPS.
- Refactored to a header-only library so the caller's preprocessor flags determine which tensor type is used. This design supports both CUDA backend (SlimTensor) and MPS backend (ETensor) from a single library.
Differential Revision: [D90126254](https://our.internmc.facebook.com/intern/diff/D90126254/)
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16454
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit a233c30 with merge base 15ad846.
NEW FAILURE - The following job has failed. FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, #16447, #16446, __->__ #16724

Copy CUDAGuard and CUDAStreamGuard from cuda/runtime/ to aoti/slim/cuda/ to satisfy the SlimTensor requirement while avoiding a potential circular dependency: cuda_backend/main_functionalities -> aoti/slimtensor -> cuda_backend/cuda_guard.

This change copies guard.h, guard.cpp, and the test files from backend/cuda_backend to backend/aoti/slim/cuda/.

Differential Revision: [D91056808](https://our.internmc.facebook.com/intern/diff/D91056808/)
…v2 (#16446) Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, #16447, __->__ #16446, #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor creation:
1. `aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory

The function supports CPU and CUDA devices and handles all 7 SlimTensor dtypes.

Also add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations for working on the new API without impacting the current pipeline. memory_slim.{h/cpp} will replace the current memory.{h/cpp} once everything has been set up.

Differential Revision: [D90126247](https://our.internmc.facebook.com/intern/diff/D90126247/)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, __->__ #16447, #16446, #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor creation: `aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory. The function supports CPU and CUDA devices and handles all 7 SlimTensor dtypes.

Changes:
- Add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations
- Add `runtime_shims_slim` library target to TARGETS with `CUDA_AVAILABLE=1` preprocessor flag
- Add `cuda_shim_slim_cpp_unittest()` function for SlimTensor test targets

Differential Revision: [D90126244](https://our.internmc.facebook.com/intern/diff/D90126244/)